5 research outputs found

    Towards building general framework for designing knowledge sharing tool based on actor network theory

    Get PDF
    Purpose: The purpose of this paper is to emphasize the need to understand the barrier and determinant factors in knowledge sharing (KS), to identify the common ones and subsequently to build a general framework that can be referred to when designing a KS tool that addresses these common factors. Design/methodology/approach: The approach comprises two major steps: surveying the past literature to determine the most common barriers and determinant factors across various KS domains, and qualifying a factor as common based on its presence in at least three to five KS domains. Grounded theory is used to analyze the past literature and to perform the categorization. Findings: This paper summarizes the categories and subcategories of barriers and determinants and demonstrates the mapping between them. Research limitations/implications: This paper has not demonstrated the actual use of the framework in building a KS tool. Practical implications: The common factors are drawn from at least 60 references on KS implementation, so they are useful for a wide range of application domains that require building KS tools. Originality/value: This paper presents an understanding of the common factors and of the association between barriers and determinants in building the general framework, whose application is demonstrated using actor network theory.

    The UAE Employees’ Perceptions towards Factors for Sustaining Big Data Implementation and Continuous Impact on Their Organization’s Performance

    Get PDF
    The UAE officially launched its Big Data initiative in 2022; however, interest in and adoption of Big Data technologies and strategies had started much earlier in the private and public sectors. This research aims to explore the perceptions of UAE employees on the factors needed to implement sustainable Big Data and the continuous impact on their organizational performance. A total of 257 employees were randomly selected for an online survey, and data were collected using a five-point Likert-style scale that was tested for validity and reliability. The findings indicate that employees believe that Big Data Sustainable Implementation leads to Business Performance. Additionally, employees consider factors such as Big Data Architecture Quality, Human Cognitive Factors, and Organizational Readiness to significantly impact Sustainable Implementation. Further, a moderating effect of Human Cognitive Factors was found on the relationship between Big Data Architecture Quality and Sustainable Implementation. The study provides managerial insights and recommendations for policymaking.
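    The abstract reports a moderating effect but does not describe the analysis; as a rough illustration only, the Python sketch below shows a generic way such a moderation can be tested with an interaction term in an ordinary least squares model. The variable names (BDAQ, HCF, SI) and the simulated data are assumptions, not the study's data or method.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated stand-in data; the real study surveyed 257 employees on Likert scales.
        rng = np.random.default_rng(1)
        n = 257
        df = pd.DataFrame({"BDAQ": rng.normal(size=n),   # Big Data Architecture Quality
                           "HCF": rng.normal(size=n)})   # Human Cognitive Factors
        df["SI"] = (0.5 * df.BDAQ + 0.3 * df.HCF         # Sustainable Implementation
                    + 0.2 * df.BDAQ * df.HCF + rng.normal(size=n))

        # "BDAQ * HCF" expands to BDAQ + HCF + BDAQ:HCF; a significant BDAQ:HCF
        # coefficient indicates moderation of the BDAQ -> SI relationship by HCF.
        model = smf.ols("SI ~ BDAQ * HCF", data=df).fit()
        print(model.params[["BDAQ", "HCF", "BDAQ:HCF"]])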

    DBSCAN inspired task scheduling algorithm for cloud infrastructure

    Get PDF
    Cloud computing plays a vital role in today's computing environment by providing efficient and scalable computation based on a pay-per-use model. To make cloud computing more reliable, it must be efficient and achieve high resource utilization, and task scheduling and resource allocation play a critical role in achieving this. Many researchers have proposed algorithms to maximize throughput and resource utilization in heterogeneous cloud environments. This work proposes a task scheduling algorithm based on DBSCAN (density-based spatial clustering of applications with noise) to achieve high efficiency. The proposed DBSCAN-based task scheduling algorithm aims to improve the quality of service of user tasks and the performance in terms of execution time, average start time and average finish time. The experimental results, compared against the existing ACO and PSO task scheduling algorithms, show that the proposed model outperforms them with a 13% improvement in execution time and a 49% improvement in average start time and average finish time.
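    The abstract gives no implementation details, so the following Python sketch only illustrates the general idea: group tasks by their resource demand with DBSCAN and map clusters of heavier tasks to faster VMs. The task sizes, VM speeds and DBSCAN parameters are assumptions for illustration, not values from the paper.

        import numpy as np
        from sklearn.cluster import DBSCAN

        rng = np.random.default_rng(42)
        tasks = rng.uniform([1e3, 256], [5e4, 4096], size=(60, 2))  # [length (MI), memory (MB)]
        vm_mips = np.array([500.0, 1000.0, 2000.0])                 # VM speeds, slow to fast

        # DBSCAN is scale-sensitive, so standardize the task attributes first.
        scaled = (tasks - tasks.mean(axis=0)) / tasks.std(axis=0)
        labels = DBSCAN(eps=0.5, min_samples=4).fit_predict(scaled)

        # Map clusters of heavier tasks to faster VMs (label -1 marks noise/outliers).
        clusters = sorted(set(labels) - {-1}, key=lambda c: tasks[labels == c, 0].mean())
        for rank, c in enumerate(clusters):
            vm = min(rank, len(vm_mips) - 1)
            est_time = tasks[labels == c, 0].sum() / vm_mips[vm]
            print(f"cluster {c}: {np.sum(labels == c)} tasks -> VM{vm}, est. {est_time:.1f}s")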

    Fault aware task scheduling in cloud using min-min and DBSCAN

    Get PDF
    Cloud computing leverages computing resources by managing them globally in a more efficient manner than individual resource services. It requires resources to be delivered in a heterogeneous and highly dynamic environment. Hence, there is always a risk of resource allocation failure that can increase the delay in task execution. Such an adverse impact in the cloud environment also raises questions about quality of service (QoS). Resource management for cloud applications and services poses big challenges, and although many researchers have proposed solutions, there is room for improvement. Clustering the resources and mapping them to tasks can also be an option to deal with such task failures or mismanaged resource allocation. Density-based spatial clustering of applications with noise (DBSCAN) is a density-based clustering algorithm that has the capability to cluster the resources in a cloud environment. The proposed algorithm prefers powerful, high-execution data centers with the least fault probability during resource allocation, which reduces the probability of faults and increases fault tolerance. The simulation is done using the CloudSim 5.0 toolkit. The results show a 25% average improvement in execution time, a 6.5% improvement in the number of tasks completed and a 3.48% improvement in the count of tasks failed as compared to ACO, PSO, BB-BC (Big Bang Big Crunch) and WOA (whale optimization algorithm).
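    As above, the paper's exact algorithm is not given here, so the Python sketch below only illustrates one plausible reading of a fault-aware min-min scheduler: expected completion times are penalized by each host's fault probability so that low-fault data centers are preferred. All task lengths, host speeds and fault probabilities are invented for illustration.

        import numpy as np

        tasks = [12000.0, 4000.0, 25000.0, 9000.0, 16000.0]   # task lengths (MI)
        mips = np.array([1000.0, 1500.0, 800.0])              # host speeds (MIPS)
        fault_p = np.array([0.02, 0.10, 0.25])                # per-host fault probability
        ready = np.zeros_like(mips)                           # time each host becomes free

        unassigned = list(range(len(tasks)))
        while unassigned:
            # Expected completion time, penalized by fault risk: ect / (1 - p).
            ect = np.array([[(ready[h] + tasks[t] / mips[h]) / (1.0 - fault_p[h])
                             for h in range(len(mips))] for t in unassigned])
            ti, h = np.unravel_index(np.argmin(ect), ect.shape)  # min-min: smallest ECT overall
            t = unassigned.pop(ti)
            ready[h] += tasks[t] / mips[h]
            print(f"task {t} -> host {h}, busy until {ready[h]:.2f}s")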

    An alternative parameter free clustering algorithm using data point positioning analysis (DPPA) – comparison with DBScan

    No full text
    DBSCAN is one of the most popular clustering algorithms and can handle clusters of arbitrary shape, multiple densities and noise. However, its accuracy depends on the right selection of its two parameters, MinPts and Eps. There have been numerous research works that try to overcome this issue by developing parameter-free clustering algorithms. We propose a clustering algorithm that uses Data Point Positioning Analysis (DPPA) to analyze the relationship of each point to all other points based on two nearest-neighbor concepts, namely 1-NN and Max-NN. The algorithm is applied to 13 benchmark datasets that have been used in many clustering studies, first with three-dimensional data and subsequently with higher-dimensional data of sixteen attributes. For the three-dimensional data, performance is assessed visually by plotting the data at various angles to determine the actual number of clusters; for the higher-dimensional data, the Silhouette coefficient is used. In both experiments the DPPA algorithm is compared against DBSCAN. The results show that DPPA is comparable to DBSCAN in that it manages to detect arbitrary cluster shapes, identify the number of clusters and handle data sets with noise.
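    The abstract names the two quantities DPPA relies on but not the full procedure, so the Python sketch below only computes those quantities, each point's nearest-neighbor (1-NN) and farthest-neighbor (Max-NN) distance, and shows the Silhouette-based comparison against DBSCAN on a toy two-dimensional dataset; it is not the authors' algorithm, and the DBSCAN parameters are guesses.

        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.cluster import DBSCAN
        from sklearn.metrics import silhouette_score, pairwise_distances

        X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)  # toy stand-in data

        D = pairwise_distances(X)
        np.fill_diagonal(D, np.inf)
        one_nn = D.min(axis=1)            # 1-NN distance of every point
        np.fill_diagonal(D, -np.inf)
        max_nn = D.max(axis=1)            # Max-NN distance of every point
        print(f"mean 1-NN: {one_nn.mean():.3f}, mean Max-NN: {max_nn.mean():.3f}")

        # Reference clustering with DBSCAN and the Silhouette coefficient used in the
        # abstract, computed on non-noise points only.
        labels = DBSCAN(eps=0.2, min_samples=5).fit_predict(X)
        mask = labels != -1
        print("DBSCAN clusters:", len(set(labels[mask])),
              "silhouette:", round(silhouette_score(X[mask], labels[mask]), 3))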